Interference Coordination in Heterogeneous Networks: Stochastic Geometry Based Modelling and Performance Analysis
Recently, data traffic has increased explosively with the proliferation of wireless devices and the popularity of free media-based services. Academia and industry in mobile communications have predicted an estimated x increase in traffic volume for the forthcoming 5G networks. This traffic explosion stimulates the deployment of heterogeneous networks (HetNets) with small cells (SCs) underlaid in the traditional macrocells, which has been considered a promising technique for contributing to the x traffic capacity gain. Initially, licensed spectrum bands are expected to be used in SCs; thus, SC deployment introduces cross-tier interference between SCs and macrocells, which severely degrades the downlink signal-to-interference-plus-noise ratio (SINR) of user equipments (UEs), especially for edge UEs in an ultra-densely deployed scenario. To alleviate this cross-tier interference between SCs and macrocells, unlicensed spectrum bands have been advocated for use in SCs. Specifically, with the aid of carrier aggregation, the gigahertz (GHz) unlicensed band has become an option for SCs in the Long Term Evolution (LTE)-Unlicensed (LTE-U) scheme, but this GHz unlicensed band is already used by WiFi networks. Thus, downlink cross-tier interference also occurs between LTE-U and WiFi networks. Accordingly, downlink cross-tier interference is inevitable regardless of whether licensed or unlicensed spectrum bands (i.e., 5 GHz) are used in SCs, and interference coordination schemes, such as further enhanced inter-cell interference coordination (FeICIC) for macrocells and SCs, and Licensed-Assisted Access (LAA) for WiFi and LTE-U networks, have been proposed to mitigate these cross-tier interferences. In this dissertation, we mainly focus on the modelling and performance analysis of HetNets with the aforementioned two interference coordination schemes (i.e., FeICIC and LTE-LAA) under the stochastic geometry framework.
Firstly, as the configuration of reduced power subframe (RPS)-related parameters had not been well investigated in a two-tier HetNet adopting RPSs and cell range expansion (CRE), we derive analytical expressions for the downlink coverage probability and rate coverage probability in such a HetNet. The optimal settings of the area of macrocell center regions, the area of SC range expansion regions, and the transmit power of RPSs for maximizing the rate coverage probability are analysed. Compared with the rate coverage probability in the two-tier HetNet with almost blank subframes (ABSs), which were proposed in the previous version of FeICIC, i.e., enhanced inter-cell interference coordination (eICIC), the results show that ABSs outperform RPSs in terms of the rate coverage probability under the optimal range expansion bias, but lead to a heavier burden on the SC backhaul. With static typical range expansion biases, however, RPSs provide a better rate coverage probability than ABSs in the two-tier HetNet.
Secondly, the conventional FeICIC scheme ignores the potential of adopting RPSs in both tiers of a two-tier HetNet without CRE, which is envisioned to improve the SINR level of edge UEs in both tiers. Accordingly, we study the downlink coverage probability and rate coverage probability of a two-tier HetNet applying our proposed scheme. The results reveal that adopting RPSs in both tiers not only improves the coverage probabilities of edge UEs, but also increases the rate coverage probability of the whole two-tier HetNet.
Thirdly, in both previous works, strict subframe alignment (SA) was assumed throughout the whole network, which is difficult to maintain between neighbouring cells in reality. Consequently, we propose a novel subframe misalignment (SM) model for a two-tier HetNet adopting RPSs with SM offsets restricted within a subframe duration, and analyse the coverage probability under the effects of RPSs and SM. The numerical results indicate that the strict SA requirement can be relaxed by up to of the subframe duration with a loss of below in terms of the downlink coverage probability.
Lastly, since a stochastic-geometry-based analysis of coexisting LTE-LAA and WiFi networks, which adopt carrier-sense multiple access with collision avoidance (CSMA/CA) as the medium access control (MAC) scheme and share multiple unlicensed channels (UCs), had been missing, we analyse the downlink throughput and spectral efficiency (SE) of the coexisting LTE-LAA and WiFi networks versus the network density and the number of UCs based on the Matérn hard-core process. The throughput and SE are obtained as functions of the downlink successful transmission probability (STP), for which analytical expressions are derived for both LTE-LAA and WiFi UEs. The results show that the throughput and SE of the whole coexisting LTE-LAA and WiFi networks can be improved significantly by increasing the number of accessible UCs. Based on the numerical results, insights are provided into the trade-off between the throughput and SE against the number of accessible UCs.
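For intuition, a Matérn type-II hard-core process caricatures CSMA/CA contention by removing any transmitter that has a higher-priority contender within the carrier-sense range. A minimal sketch of sampling it by dependent thinning of a parent PPP (all parameters here are assumed, not the dissertation's):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, delta, R = 2.0, 0.5, 10.0       # parent density, hard-core distance, window side (assumed)
n = rng.poisson(lam * R * R)          # number of parent points on a [0,R]x[0,R] window
pts = rng.random((n, 2)) * R
marks = rng.random(n)                 # i.i.d. uniform marks act as contention priorities
keep = []
for i in range(n):
    d = np.hypot(*(pts - pts[i]).T)   # distances from point i to all parents
    # retain i iff no contender within the hard-core distance holds a smaller mark
    if not np.any((d < delta) & (d > 0) & (marks < marks[i])):
        keep.append(i)
retained = pts[keep]
density = len(retained) / R**2
# Matern II retained density: (1 - exp(-lam*pi*delta^2)) / (pi*delta^2), about 1.01 here
```

The empirical density of retained points should approach the closed-form value above, with small deviations from window edge effects.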
All the derived results have been validated by Monte Carlo simulations in MATLAB, and the conclusions drawn from them can provide guidelines for future deployments of the FeICIC and LTE-LAA interference coordination schemes in HetNets.
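As a toy illustration of how such stochastic-geometry expressions are checked against Monte Carlo simulation, the sketch below (not the dissertation's HetNet model; all parameters are assumed) simulates the downlink coverage probability of a single-tier PPP network with Rayleigh fading and nearest-BS association, and compares it with the classical interference-limited closed form for path-loss exponent 4:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, R, alpha, T = 1.0, 15.0, 4.0, 1.0   # BS density, window radius, path-loss exponent, SIR threshold
trials, hits = 4000, 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)   # BSs form a PPP on a disk around the typical UE
    if n < 2:
        continue
    r = R * np.sqrt(rng.random(n))        # distances to the origin (uniform over the disk)
    p = rng.exponential(size=n) * r**(-alpha)   # Rayleigh-faded received powers
    k = np.argmin(r)                       # the nearest BS serves the typical UE
    hits += p[k] / (p.sum() - p[k]) > T    # coverage event: SIR above threshold
sim = hits / trials
# closed form for alpha = 4, interference-limited, nearest-BS association
analytic = 1 / (1 + np.sqrt(T) * (np.pi / 2 - np.arctan(1 / np.sqrt(T))))
```

The closed form holds only in this simplified single-tier, noise-free setting; the HetNet expressions in the dissertation generalize it to multiple tiers and coordination schemes.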
PAGE: Equilibrate Personalization and Generalization in Federated Learning
Federated learning (FL) is becoming a major driving force behind machine
learning as a service, where customers (clients) collaboratively benefit from
shared local updates under the orchestration of the service provider (server).
Representing clients' current demands and the server's future demand, local
model personalization and global model generalization have been investigated
separately, as the ill effects of data heterogeneity have forced the community
to focus on one at the expense of the other. However, these two seemingly
competing goals are of equal importance rather than an either-or choice, and
should be achieved simultaneously. In this paper, we propose PAGE, the first
algorithm to balance personalization and generalization using game theory,
which reshapes FL as a co-opetition game between clients and the server. To
explore the equilibrium, PAGE further formulates the game as a Markov decision
process and leverages reinforcement learning to reduce the solving complexity.
Extensive experiments on four widely used datasets show that
PAGE outperforms state-of-the-art FL baselines in terms of global and local
prediction accuracy simultaneously, and the accuracy can be improved by up to
35.20% and 39.91%, respectively. In addition, biased variants of PAGE imply
promising adaptiveness to demand shifts in practice.
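The personalization-generalization tension can be made concrete with a toy example (this is not PAGE itself, which learns the balance via reinforcement learning; the interpolation weight `alpha` below is a hand-set stand-in): two clients with heterogeneous linear tasks, a FedAvg-style global model, and personalized models interpolating between local and global fits.

```python
import numpy as np

rng = np.random.default_rng(2)
# two clients with heterogeneous linear tasks y = w_k * x + noise (toy heterogeneity)
w_true = [1.0, 3.0]
X = [rng.standard_normal(200) for _ in w_true]
Y = [w * x + 0.1 * rng.standard_normal(200) for w, x in zip(w_true, X)]
w_local = [float(x @ y / (x @ x)) for x, y in zip(X, Y)]   # per-client least squares
w_global = float(np.mean(w_local))                          # FedAvg-style aggregate
alpha = 0.7                                                 # hand-set personalization weight
w_pers = [alpha * wl + (1 - alpha) * w_global for wl in w_local]

def mse(w, x, y):
    return float(np.mean((y - w * x) ** 2))
# each personalized model beats the single global model on its own client's data,
# while the global model remains a compromise across both tasks
```

PAGE's contribution, in this picture, is to choose the balance point adaptively rather than fixing it by hand.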
Modelling and Performance Analysis of the Over-the-Air Computing in Cellular IoT Networks
Ultra-fast wireless data aggregation (WDA) of distributed data has emerged as
a critical design challenge in the ultra-densely deployed cellular Internet of
Things network (CITN) due to limited spectral resources. Over-the-air computing
(AirComp) has been proposed as an effective solution for ultra-fast WDA by
exploiting the superposition property of wireless channels. However, the effect
of the access point (AP) access radius on AirComp performance has not yet been
investigated. Therefore, in this work, the mean square error (MSE) performance
of AirComp in the ultra-densely deployed CITN is analyzed as a function of the
AP access radius. By modelling the spatial locations of Internet of Things
devices as a Poisson point process, the expression of MSE is derived in an
analytical form, which is validated by Monte Carlo simulations. Based on the
analytical MSE, we investigate the effect of AP access radius on the MSE of
AirComp numerically. The results show that there exists an optimal AP access
radius for AirComp, which can decrease the MSE by up to 12.7%. This indicates
that the AP access radius should be carefully chosen to improve AirComp
performance in the ultra-densely deployed CITN.
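The superposition idea behind AirComp can be sketched in a simplified single-cell setting (all parameters are assumed; this is not the paper's PPP-based model): each device pre-scales its measurement by the inverse of its channel, so the AP's received signal is directly the noisy sum of the measurements, and the residual MSE comes only from receiver noise.

```python
import numpy as np

rng = np.random.default_rng(1)
K, sigma, eta = 20, 0.1, 1.0      # devices, receiver noise std, power scaling (assumed)
trials, sq_err = 5000, []
for _ in range(trials):
    x = rng.standard_normal(K)                     # per-device measurements
    h = rng.rayleigh(scale=np.sqrt(0.5), size=K)   # channel magnitudes
    tx = np.sqrt(eta) * x / h                      # channel-inversion pre-scaling
    y = np.sum(h * tx) + sigma * rng.standard_normal()  # signals add over the air
    est = y / (np.sqrt(eta) * K)                   # AP recovers the mean in one shot
    sq_err.append((est - x.mean()) ** 2)
mse = float(np.mean(sq_err))
theory = sigma**2 / (eta * K**2)   # residual MSE is driven by receiver noise alone
# note: plain channel inversion ignores the transmit-power blow-up when h is tiny,
# which is one reason realistic AirComp designs bound the inversion
```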
On the performance of an integrated communication and localization system: an analytical framework
Quantifying the performance bound of an integrated localization and
communication (ILAC) system and the trade-off between communication and
localization performance is critical. In this letter, we consider an ILAC
system that can perform communication and localization via time-domain or
frequency-domain resource allocation. We develop an analytical framework to
derive the closed-form expression of the capacity loss versus localization
Cramér-Rao lower bound (CRB) loss via time-domain and frequency-domain resource
allocation. Simulation results validate the analytical model and demonstrate
that frequency-domain resource allocation is preferable in scenarios with a
smaller number of antennas at the next generation nodeB (gNB) and a larger
distance between user equipment (UE) and gNB, while time-domain resource
allocation is preferable in scenarios with a larger number of antennas and
smaller distance between the UE and the gNB.

Comment: 5 pages, 3 figures
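The communication-localization trade-off can be illustrated numerically (an illustrative sketch with assumed parameters, not the letter's closed-form expressions): as the fraction rho of time-domain resources devoted to localization grows, capacity falls linearly while the delay-estimation CRB, and hence the ranging error, shrinks.

```python
import numpy as np

B, snr, c = 100e6, 10.0, 3e8            # bandwidth (Hz), linear SNR, speed of light (assumed)
rho = np.linspace(0.05, 0.95, 19)        # fraction of time slots given to localization
cap = (1 - rho) * B * np.log2(1 + snr)   # time sharing: capacity shrinks linearly
beta2 = (2 * np.pi * B) ** 2 / 12        # mean-square bandwidth of a flat spectrum
crb = 1.0 / (rho * snr * beta2)          # delay CRB improves with more localization resources
range_rmse = c * np.sqrt(crb)            # corresponding ranging error bound in metres
```

Sweeping rho traces out the trade-off curve: every extra slot spent on localization buys ranging accuracy at a fixed capacity price.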
Adaptive Communications in Collaborative Perception with Domain Alignment for Autonomous Driving
Collaborative perception among multiple connected and autonomous vehicles can
greatly enhance perceptive capabilities by allowing vehicles to exchange
supplementary information via communications. Despite advances in previous
approaches, challenges still remain due to channel variations and data
heterogeneity among collaborative vehicles. To address these issues, we propose
ACC-DA, a channel-aware collaborative perception framework to dynamically
adjust the communication graph and minimize the average transmission delay
while mitigating the side effects from the data heterogeneity. Our novelties
lie in three aspects. We first design a transmission delay minimization method,
which can construct the communication graph and minimize the transmission delay
according to different channel state information. We then propose an adaptive
data reconstruction mechanism, which can dynamically adjust the rate-distortion
trade-off to enhance perception efficiency. Moreover, it minimizes the temporal
redundancy during data transmissions. Finally, we conceive a domain alignment
scheme to align the data distribution from different vehicles, which can
mitigate the domain gap between different vehicles and improve the performance
of the target task. Comprehensive experiments demonstrate the effectiveness of
our method in comparison to existing state-of-the-art works.

Comment: 6 pages, 6 figures
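The first component, transmission delay minimization over the communication graph, can be caricatured as a budgeted neighbor selection (a toy sketch; the vehicle names, link rates, and greedy rule are illustrative assumptions, not the paper's method):

```python
# toy budgeted neighbor selection for minimizing average transmission delay
rates = {"v1": 40.0, "v2": 10.0, "v3": 25.0}   # per-link rates in Mbps (assumed)
size = 8.0                                      # megabits of shared features (assumed)
k = 2                                           # collaboration budget: talk to k vehicles
chosen = sorted(rates, key=lambda v: size / rates[v])[:k]   # fastest links first
avg_delay = sum(size / rates[v] for v in chosen) / k        # seconds
```

In the full framework this choice is made jointly with the rate-distortion setting, since compressing the shared features also shrinks `size`.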
Complexity Matters: Rethinking the Latent Space for Generative Modeling
In generative modeling, numerous successful approaches leverage a
low-dimensional latent space, e.g., Stable Diffusion models the latent space
induced by an encoder and generates images through a paired decoder. Although
the selection of the latent space is empirically pivotal, determining the
optimal choice and the process of identifying it remain unclear. In this study,
we aim to shed light on this under-explored topic by rethinking the latent
space from the perspective of model complexity. Our investigation starts with
the classic generative adversarial networks (GANs). Inspired by the GAN
training objective, we propose a novel "distance" between the latent and data
distributions, whose minimization coincides with that of the generator
complexity. The minimizer of this distance is characterized as the optimal
data-dependent latent that most effectively capitalizes on the generator's
capacity. Then, we consider parameterizing such a latent distribution by an
encoder network and propose a two-stage training strategy called Decoupled
Autoencoder (DAE), where the encoder is only updated in the first stage with an
auxiliary decoder and then frozen in the second stage while the actual decoder
is being trained. DAE can improve the latent distribution and as a result,
improve the generative performance. Our theoretical analyses are corroborated
by comprehensive experiments on various models such as VQGAN and Diffusion
Transformer, where our modifications yield significant improvements in sample
quality with decreased model complexity.

Comment: Accepted to NeurIPS 2023 (Spotlight)
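The two-stage idea can be sketched on a linear toy model (closed-form least-squares stand-ins replace the paper's gradient training of deep networks; every detail here is an assumption): stage 1 fixes the encoder with the help of an auxiliary reconstruction objective, and stage 2 fits the actual decoder on the frozen latents.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((256, 2)) @ rng.standard_normal((2, 8))   # rank-2 toy data
# stage 1: train the encoder with an auxiliary decoder; PCA is the closed-form stand-in
_, _, Vt = np.linalg.svd(X, full_matrices=False)
E = Vt[:2].T                    # encoder: project onto the top-2 principal directions
Z = X @ E                       # latent codes are frozen once stage 1 ends
# stage 2: with the encoder frozen, fit the actual decoder on the fixed latents
D, *_ = np.linalg.lstsq(Z, X, rcond=None)
err = float(np.mean((Z @ D - X) ** 2))   # near zero: the latent captured all of X
```

Because the data is exactly rank 2, a well-chosen 2-dimensional latent loses nothing; the paper's point is that choosing the latent to match generator complexity matters just as much in the deep, nonlinear case.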
High-precision, non-invasive anti-microvascular approach via concurrent ultrasound and laser irradiation
Antivascular therapy represents a proven strategy to treat angiogenesis. By applying synchronized ultrasound bursts and nanosecond laser irradiation, we developed a novel, selective, non-invasive, localized antivascular method, termed photo-mediated ultrasound therapy (PUT). PUT takes advantage of the high native optical contrast among biological tissues and can treat microvessels without causing collateral damage to the surrounding tissue. In a chicken yolk sac membrane model, under the same ultrasound parameters (1 MHz at 0.45 MPa and 10 Hz with 10% duty cycle), PUT with 4 mJ/cm2 and 6 mJ/cm2 laser fluence induced 51% (p = 0.001) and 37% (p = 0.018) vessel diameter reductions respectively. With 8 mJ/cm2 laser fluence, PUT would yield vessel disruption (90%, p < 0.01). Selectivity of PUT was demonstrated by utilizing laser wavelengths at 578 nm or 650 nm, where PUT selectively shrank veins or occluded arteries. In a rabbit ear model, PUT induced a 68.5% reduction in blood perfusion after 7 days (p < 0.001) without damaging the surrounding cells. In vitro experiments in human blood suggested that cavitation may play a role in PUT. In conclusion, PUT holds significant promise as a novel non-invasive antivascular method with the capability to precisely target blood vessels.